Supremacy: AI, ChatGPT, and the Race That Will Change the World

  • 2025-08-20 (modified: 2025-10-15)
  • Published: 2024-09-10
  • Author: Parmy Olson

A book tracing the history of the AI development race since 2000 through the journeys of Sam Altman and Demis Hassabis.

Prologue

A speed never seen before:

In the fifteen years that I’ve written about the technology industry, I’ve never seen a field move as quickly as artificial intelligence has in just the last two years.

Some talk of utopia and others of doom, but these extreme predictions only obscure the problems right in front of us:

Many AI builders say this technology promises a path to utopia. Others say it could bring about the collapse of our civilization. In reality, the science fiction scenarios have distracted us from the more insidious ways AI is threatening to harm society by perpetuating racism, threatening entire creative industries, and more.

The outsized influence of a handful of tech companies:

Behind this invisible force are companies that have grabbed control of AI’s development and raced to make it more powerful. Driven by an insatiable hunger to grow, they’ve cut corners and misled the public about their products, putting themselves on course to become highly questionable stewards of AI.

No other organizations in history have amassed so much power or touched so many people as today’s tech giants. Google conducts web searches for 90 percent of Earth’s internet users, and Microsoft software is used by 70 percent of humans with a computer. … AI’s future has been written by just two men: Sam Altman and Demis Hassabis. … Altman was the reason the world got ChatGPT. Hassabis was the reason we got it so quickly.

The problems monopoly creates:

The concentration of power in AI would lead to reduced competition and herald new intrusions into private life and new forms of racial and gender prejudice. Already today, if you ask a popular AI tool to generate images of women, it’ll make them sexy and scantily clad; ask it for photorealistic CEOs, and it’ll generate images of white men; ask for a criminal, and it will often generate images of Black men.

The parallel with the Edison vs. Westinghouse story:

The pair’s journey was not all that different from one two centuries ago, when two entrepreneurs named Thomas Edison and George Westinghouse went to war. … In the end, Westinghouse’s more efficient electrical standard became the most popular in the world. But he didn’t win the so-called War of the Currents. General Electric did.

As corporate interests pushed Altman and Hassabis to unleash bigger and more powerful models, it was the tech titans who came out as the winners, only this time the race was to replicate our own intelligence.

How the book is organized:

The second half of this book lays out those risks, but first I’ll explain how we got here, and how the visions of two innovators who tried to build AI for good were eventually ground down by the forces of monopoly.

Act 1. The Dream

Chapter 1. High School Hero

Sam Altman’s childhood and early career

Chapter 2. Winning, Winning, Winning

Demis Hassabis’s childhood and early career

Chapter 3. Save the Humans

Sam Altman’s childhood and early career

Chapter 4. A Better Brain

Demis Hassabis’s childhood and early career

Chapter 5. For Utopia, for Money

Google and Google Brain:

As Hassabis opened it up on his computer in London, he saw an invitation to meet with Larry Page, Google’s CEO. … “He told me that he always thought of Google as an AI company, even when he was in that garage in 1998.”

Partly it was personal, as Page’s father had been a professor of AI and computer science until his death in 1996. … Page’s project, which Hassabis didn’t know about at the time, was called Google Brain. It had come about as a proposal from Andrew Ng, a gentle-voiced Stanford University professor who wanted to build more advanced AI systems from inside Google. …

Page loved the idea and approved it, bringing Ng on board to lead Google’s most cutting-edge AI research project yet. … But a few years later, Google Brain didn’t look like it was on course to build AGI. Instead, it was helping Google improve its targeted advertising business. …

In that sense, Ng had done Hassabis a huge favor. By basing himself in the Google mothership, Ng’s research was already on track to contribute to the company’s ad business so that DeepMind didn’t immediately have to.

Hassabis’s conditions for Google: no military use, and an ethics board:

Hassabis said he had two big conditions for selling. First, he and his cofounders didn’t want Google to ever use DeepMind’s technology for military purposes, whether that was for steering autonomous drones or weapons or supporting soldiers in the field. … Second, they wanted Google’s leaders to sign what they called an ethics and safety agreement. …

(Larry Page) agreed to DeepMind’s demands for an ethics board as part of the acquisition.

Transhumanism and eugenics:

Shane Legg was most aligned with the more extreme ideologies linked to the pursuit of AGI, including one that had been decades in the making, according to his former colleagues. Known as transhumanism, the idea had controversial roots and a history that helped explain why AI’s builders sometimes neglected the nasty, more current side effects of the technology. …

The core idea stems back to the 1940s and 1960s, when an evolutionary biologist named Julian Huxley joined and ran the British Eugenics Society. The eugenics movement proposed that humans should improve themselves through selective breeding, and it flourished in British universities and among the country’s intellectual and upper classes. Huxley himself came from an aristocratic family (his brother Aldous wrote Brave New World), and he believed society’s upper crust was genetically superior. Lower-class people needed to be weeded out like a bad crop and subjected to forced sterilization. “(They) are reproducing too fast,” Huxley wrote.

When the Nazis latched on to the eugenics movement, Huxley decided it needed a rebrand. He coined a new term, transhumanism, in an essay saying that alongside proper breeding, humanity could also “transcend itself” through science and technology.

Accelerationists, doomers, and money:

The problem with some of these ideas was that, over the years, their followers grew increasingly zealous. Some so-called AI accelerationists, for instance, believe that scientists have a moral imperative to work as quickly as possible to build AGI to create a posthuman paradise, a kind of rapture for nerds. If it was built in their lifetime, they could live forever. But speeding up AI’s development could also mean cutting corners and making technology that harmed certain groups of people or that could spin out of control.

That’s where others took the opposite stance, believing that AI represented a kind of devil figure of the future that needed to be stopped. Eliezer Yudkowsky, the bearded libertarian who helped radicalize Jaan Tallinn over coffee, was a leading figure in that ideological movement, which he gave increasing momentum through his site LessWrong. … LessWrong had become the internet’s most influential hub for AI apocalypse fears, and some press reports pointed out that it had all the trappings of a modern doomsday cult. …

But perhaps the most disturbing ideologies that were starting to percolate around AGI were those focused on creating a near-perfect human species in digital form. This idea was popularized in part by Nick Bostrom’s Superintelligence. … These ideas were irresistible to some people in Silicon Valley, who believed such fantastical ways of life were achievable with the right algorithms. …

As these modern-day technological ideologies coincided with DeepMind’s negotiations with Google, a hard truth was coming to bear. Figuring out a responsible form of stewardship for AI was becoming fraught for tech companies. Different objectives were on track to crash into one another, driven by an almost religious zealotry on one side and an unstoppable hunger for commercial growth on the other.

The deal closes:

When the deal was finally inked and the ethics board added to the acquisition agreement, Google was buying DeepMind for $650 million. … Now instead of worrying about Facebook or Amazon poaching his staff, Hassabis could poach their staff and lure some of the greatest AI minds from academia with eye-popping salaries. … And now thanks to being part of Google, they had access to the world’s best supercomputers and the most data for training AI models too.

The ethics board:

About a year after the acquisition, DeepMind convened its first meeting for the ethics and safety board at a conference room inside SpaceX’s headquarters in California. Hassabis, Suleyman, and Legg were on the board, and so were Elon Musk and Reid Hoffman, the billionaire cofounder of LinkedIn turned venture capital investor. The other men at that first meeting included Larry Page, Google executive Sundar Pichai, Google’s legal chief Kent Walker, Hassabis’s postdoc advisor Peter Dayan, and Oxford University philosopher Toby Ord, according to people with knowledge of the meeting.

The meeting went well, but then the founders got some surprising news from Google. The company didn’t want its new ethics board to go forward after all. … Part of Google’s explanation at the time was that some of the board’s key members had conflicts of interest - Musk was potentially backing other AI efforts outside of DeepMind, for instance - and establishing a board just wasn’t legally feasible.

Alphabet:

The DeepMind founders didn’t know this at the time, but Google was preparing to turn itself into a conglomerate called “Alphabet,” which would allow its various business divisions to operate with more independence. … The idea sounded promising. …

Out of view, Google’s real goal was to boost its share price, which had been stagnating. For years, Wall Street analysts had been struggling to evaluate Google’s bundle of other businesses outside of YouTube, Android, and its lucrative search engine. …

But then, when Google announced that it was being restructured under the name Alphabet, it wouldn’t confirm or announce any plans to give DeepMind more legal autonomy.

OpenAI:

Hassabis didn’t have much time to dwell on the way Google seemed to be fobbing him off. There was a more troubling matter coming up on the horizon. Over in San Francisco, some start-up founders were setting up another research lab that had the same goal as DeepMind’s. … What made things worse was that this new organization had been spun up by his old investor, Elon Musk. It was called OpenAI.

Chapter 6. The Mission

From monopoly to competition:

DeepMind’s objective was so radical that it could effectively operate like a monopoly. … Their quest was unique. Now the possibility of a rival in Silicon Valley was going to change all that. The quest to build AGI was about to turn into a race.

Anger and rivalry toward OpenAI:

The more Hassabis learned about OpenAI, the more his anger rose. … OpenAI had seven people listed as cofounders on its website. When Hassabis took a closer look at the names, he realized that five of them had worked as consultants and interns at DeepMind for several months….

One of those five former visitors was a renowned AI scientist named Ilya Sutskever, who specialized in deep learning, not DeepMind’s signature technique, reinforcement learning. Sutskever was OpenAI’s chief scientist and, like his cofounders, a deep believer in the possibilities of AGI.

Hassabis’s doubts about OpenAI’s “open” strategy:

Hassabis questioned OpenAI’s promises to release its technology to the public. … “As you get more and more powerful dual-purpose technologies, what about bad actors accessing that technology, for bad ends? … You have very limited control over what somebody might do.”

Elon Musk bad-mouthing Hassabis:

Deepening the humiliation, DeepMind leaders caught wind that Musk was trash-talking Hassabis to his contacts in Silicon Valley, according to people who worked at DeepMind and OpenAI.

Elon, sinking ever deeper into AI-driven human-extinction fears while still wanting to make money:

As Musk went down the rabbit hole of AI doom, he started investing more of his money and time in the issue. He gave $10 million to the Future of Life Institute, a nonprofit organization that campaigned for more research into stopping human annihilation through AI. …

But on the other hand, he was also experiencing FOMO, the debilitating “fear of missing out” that fuels some of the biggest decisions in Silicon Valley about where to put money. … Not only had Google bought DeepMind, but Mark Zuckerberg had set up a new division called Facebook AI Research, or FAIR, and hired one of the world’s leading specialists in deep learning, Yann LeCun, to run it.

The difference between Demis Hassabis and Sam Altman:

While Hassabis had believed that AGI would unlock the mysteries of science and the divine, Altman would say he saw it as the route to financial abundance for the world.

Early members who joined because they shared the value of OpenAI’s “openness”:

Brockman then took charge of poaching an initial group of talented scientists from companies like Google and Facebook. … Alongside the big names and vision, they liked the “open” part of this new organization.

The launch:

A website, openai.com, popped up with a blog post written by Brockman and Sutskever introducing the project. “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” they wrote.

Meanwhile, academia loses talent to corporate recruiting wars:

The university brain drain was happening for two reasons. The first and most obvious was pay. … A second reason was the data and computing power needed to run experiments in AI research.

Scaling laws:

When it came to making AI smarter, more was better. … If you trained an AI model with more and more data, and you also raised the number of parameters the model had, and you also boosted the computing power used for training, the AI model would become more proficient.
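
The book states this only qualitatively. As a concrete illustration, here is a minimal sketch of the power-law form such scaling laws later took in the literature; the functional form and coefficients follow the published "Chinchilla" fits (Hoffmann et al., 2022), not anything in the book, and should be treated as assumptions used only to show the shape of the curve:

```python
# Sketch of a neural scaling law: predicted loss falls as a power law in
# parameter count N and training tokens D. Form and coefficients are the
# published Chinchilla fits (Hoffmann et al., 2022), used here only to
# illustrate "more is better" with diminishing returns.
def predicted_loss(n_params: float, n_tokens: float) -> float:
    E, A, alpha, B, beta = 1.69, 406.4, 0.34, 410.7, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

for n, d in [(1e8, 2e9), (1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"N={n:.0e} params, D={d:.0e} tokens -> loss {predicted_loss(n, d):.2f}")
```

Each tenfold increase in parameters and data lowers the predicted loss, but by smaller and smaller amounts, which is exactly the "more was better" dynamic the passage describes.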

Elon doesn’t keep his promises:

It wasn’t long before problems started to arise. OpenAI didn’t immediately get the $1 billion in funding commitments it had announced in December, from Musk, Thiel, and others. In fact, over the next few years, the nonprofit managed to collect only a little over $130 million in actual donations, according to an investigation by the tech news site TechCrunch, which pored over OpenAI’s federal tax filings.

Dario Amodei:

A few months after the launch, they got a visit from another respected Google Brain researcher named Dario Amodei. … Amodei was part of the growing cohort of scientists who had similar fears of doom to Musk and Eliezer. He’d been working at Google when, less than a year earlier, the company came under fire after the vision recognition system in its Photos app was spotted classifying people of color as gorillas.

Elon Musk openly saying he invested in OpenAI because he disliked Hassabis (and even then, what he actually invested was only 5-10% of what he had pledged):

At one point, Musk started talking about why he’d funded OpenAI, and the reason was Demis Hassabis.

“I was one of the investors in DeepMind, and I was very concerned that Larry thinks Demis works for him. Actually, Demis just works for himself,” Musk said, according to a person who was there. “And I don’t trust Demis.”

Elon Musk’s impatience; Musk and Altman finally part ways:

As the months passed, Musk grew more and more concerned that OpenAI’s technology simply wasn’t as powerful as DeepMind’s, according to former OpenAI staff. … He complained to Altman that he had recruited an impressive roster of scientists but didn’t have any demos that blew DeepMind out of the water. … He then offered a quick solution: he would take control of OpenAI and merge it with Tesla. … But Altman and his cofounders wanted to stay in control. They rejected Musk’s proposal.

In February 2018, OpenAI briefly mentioned in a public announcement about new donors that Musk was leaving, but it framed the reason as benign. Musk was leaving for ethical reasons. He had too big a conflict of interest in the field of AI.

Elon says he worries about humanity’s future, but seems bent only on beating DeepMind:

It was clear that Musk was also chronically unreliable. … For all the concerns Musk had about humanity’s future, he seemed far more preoccupied with staying ahead of the competition.

Sam Altman attempts a “pivot” to a for-profit company. The AI field begins to shift from “academic competition” to something like the “Wild West”:

Altman and Musk had established OpenAI as a nonprofit and promised to share its research and even its patents with other organizations if it looked like they were getting closer to the threshold of superintelligent machines. …

Now as Altman fought to stay alive, he was going to knock down some of those guardrails. The cautious approach he’d started with was going to morph into something more reckless, and doing so would transform the AI field that he and DeepMind had been working in from a slow and largely academic pursuit into something more like the Wild West. … He was a tech founder, and tech founders had to pivot sometimes. That was how it worked in Silicon Valley. He would only need to tweak some of OpenAI’s founding principles - just a little bit.

Act 2. The Leviathans

Chapter 7. Playing Games

50 percent Google-related work, the other 50 percent independent research:

Suleyman wanted to show that DeepMind could stand on its own two feet as a business, so he dove into proving out the value of its AI systems in the real world. He put renewed focus on a division he’d started called Applied, whose researchers used reinforcement learning techniques to tackle problems in healthcare, energy, and robotics to potentially turn into businesses. Another team of about twenty researchers, who called themselves DeepMind for Google, worked on projects that directly helped Google’s business, making YouTube’s recommendations more efficient, for instance, or improving Google’s ad targeting algorithms. Google agreed to give DeepMind 50 percent of the proceeds of the value that it added to those features, according to someone with knowledge of those agreements. About two-thirds of the projects ended up being useful to Google, another former staffer says.

Hassabis’s overly abstract plans for AI safety, and his differences with Suleyman:

Hassabis’s brain would grapple for solutions, but they sometimes sounded a little off the wall. For instance, he’d suggest that as their AI got more powerful and potentially dangerous, DeepMind could hire Terence Tao, a professor at the University of California, Los Angeles, who was widely regarded as one of the world’s greatest living mathematicians. … Tao had said in interviews that AI was largely clever mathematics and that the world would probably never have true AI. He saw the technology in the same mechanistic and almost black-and-white way that Hassabis did. If AI got out of control, math could contain it. …

Suleyman disagreed with his cofounder’s approach, believing it far too focused on numbers and theory. He believed AI needed to be managed by people, not just clever math, to make it safe.

A new proposal from Google headquarters - a partial spinout:

They (Google) now suggested a third option: DeepMind could do a kind of partial spinout and have its own board of trustees guiding its creation of superintelligent AI, but Alphabet would retain some ownership of the AI company. To show they meant it, Alphabet put the commitment in writing. … An agreement in writing holds more weight than a spoken one, and the DeepMind founders believed Google’s pledge to set them free was real this time.

A plan to restructure DeepMind as a UN-like “Global Interest Company”:

After consulting with legal experts, DeepMind decided it would not go down the same route that Sam Altman initially had by becoming a nonprofit organization. Instead, its founders contrived a completely new legal structure they called a global interest company, or GIC. The idea was that DeepMind would become an organization that was more like a division of the United Nations, a transparent and responsible steward of AI for humanity’s sake.

Google’s ambitions for the Chinese market:

As Google’s business in the United States and other Western markets matured, China presented a unique opportunity. … But Google couldn’t just waltz into China. In fact, in 2010 it had exited the country after accusing Beijing of hacking its intellectual property and the Gmail accounts of Chinese human rights activists. … Google’s leadership were cocky and believed this was all just temporary, because China’s citizens would soon enough be clamoring for the slick, powerful services offered by Silicon Valley’s web giants. …

Schmidt was wrong. Instead of wasting away, China’s own internet sector boomed. Companies like Meituan, Baidu, and Alibaba became juggernauts as Chinese engineers who’d worked and started companies in Silicon Valley flew back home to build their own tech leviathans. …

Marketing with AlphaGo:

Then came a public relations opportunity that would put DeepMind at center stage. DeepMind had been training its AI models with games, and its latest program, AlphaGo, could play the two-player abstract strategy board game of Go. …

He (Demis Hassabis) understood that if AlphaGo could beat a global champion of Go in the same way IBM’s Deep Blue computer had beaten chess’s Garry Kasparov in 1997, it would create a thrilling new milestone for AI and cement DeepMind’s credibility as a leader in the field. DeepMind had its sights on South Korea’s Lee Sedol and challenged him to a five-game match in Seoul in March 2016. …

Google wanted DeepMind to put AlphaGo in front of an even more advanced player, Ke Jie, a nineteen-year-old ranked as the world’s number-one Go player at the time and who was based in China. …

The situation worried Hassabis, according to former DeepMind staff. If AlphaGo won, it would look like the big bad AI was out to beat humans again and again. If they lost, then all the hype they’d generated in South Korea would be wiped out. It seemed like a lost cause either way. … Hassabis used his strategic prowess to work out a compromise with Pichai: they’d do another match, but this time they’d use a new version of AlphaGo called AlphaGo Master. Instead of running on hundreds of different computers, it would run on just one machine powered by a Google chip. This way, they could frame the match as a test of their new AI system rather than another attempt to crush human champions. If the system lost, they could save face by saying it wasn’t comparable to the original AlphaGo, but if it won, they could herald a new, more powerful system. … Pichai agreed. …

The new AlphaGo won all three games against Ke Jie, and hardly anyone in China knew.

Google slowly abandons “Don’t be Evil”:

Google was so desperate to get back into the Chinese market that it also reversed some of its previous resistance to Beijing’s demands on censorship and even surveillance. … Google executives had ordered its engineers to work on a prototype search engine for China codenamed Dragonfly, which blacklisted certain search terms and linked people’s searches to their mobile numbers. …

Chinese technology firms were making big strides on AI research. They didn’t really need TensorFlow - or Google, for that matter. The Chinese internet giant Baidu had even poached Andrew Ng, the Stanford professor who’d started Google Brain, from Google a year earlier. …

By creating a storm of positive publicity for DeepMind and showcasing its advanced AI, he’d made the lab look even more useful to Alphabet. …

The US Department of Defense had launched what it called Project Maven in 2017 to try to use more AI and machine learning in its defense strategies, for instance by giving its drones computer vision to get better at targeting weapons. When Google got involved, it was expecting to make $250 million a year from the partnership, according to emails leaked to The Intercept. Massive internal protests prompted Google to shut the project down and decline to renew its contract with the Defense Department, and it validated DeepMind’s worries about its AI being misused.

Investment in ethics and safety falling woefully short:

In 2020, for instance, most of DeepMind’s roughly one thousand staff members were made up of research scientists and engineers, while fewer than a dozen were researching ethics and just two were working at a PhD level doing academic research on the issue.

The difference between AI ethics and AI safety:

In AI, “ethics” and “safety” can refer to different research goals, and in recent years, their proponents have been at odds with one another. Researchers who say they work in AI safety tend to swim in the same waters as Yudkowsky and Jaan Tallinn and want to ensure that a superintelligent AGI system won’t cause catastrophic harm to people in the future, for instance by using drug discovery to build chemical weapons and wiping them out or by spreading misinformation across the internet to completely destabilize society.

Ethics research, on the other hand, focuses more on shaping how AI systems are designed and used today. They study how the technology might already be harming people.

Is DeepMind really serious about AI ethics and safety?

If DeepMind wasn’t putting its money where its mouth was on ethics, that raised questions about why the founders were so keen to spin out from Google in the first place. Did they really care about preventing their technology from doing harm, or were they feeding a more personal instinct to maintain control?

Companies losing their founding ideals:

In much the same way Google had started with a “don’t be evil” motto, DeepMind’s founders had kicked off their life under Google with good intentions. They’d left $150 million on the table with Facebook to keep an ethics board. But years later, they seemed to be prioritizing performance and prestige over ethics and safety. …

All of this raised a bigger question. Could you even do meaningful work on ethical AI from inside a large corporation? The answer came from inside Google itself. It was a resounding no.

Chapter 8. Everything is Awesome

The root cause of why ethical AI development is so hard: companies swollen beyond any historical precedent, plus extreme monopoly.

To understand why it became so maddeningly difficult to design ethical AI systems at Google, or to even turn innovative ideas into products at the company, you have to step back and look at some numbers. At the time of writing, Google’s parent company, Alphabet Inc., had a market capitalization of $1.8 trillion. …

Looking back in history, the companies that we once thought of as giants also pale in comparison to those of today. At its peak before being broken up in 1984, AT&T had a market capitalization of around $60 billion in 1984 dollars, or about $150 billion in today’s money. General Electric’s highest market cap was about $600 billion in 2000.

Even the market dominance of tech giants is unparalleled. Before regulators broke it up in 1911, Standard Oil controlled about 90 percent of the oil business in the United States. Today, Google controls about 92 percent of the search engine market - globally. … No government or empire in history has touched so many people at once. …

We have no historical reference point for what happens when companies become this big.

The origin story of the Stochastic Parrots paper.

Chapter 9. The Goliath Paradox

Google, grown so big it has turned sluggish and overly conservative:

In 2017, Google had about eighty thousand salaried employees. … The problem with being so big was that if someone did invent something groundbreaking inside Google, it might struggle to see the light of day. Google’s digital ad business was sacrosanct. You didn’t mess with the algorithms that powered it unless you really had to.

The beginnings of the transformer architecture. See: the origin story of the transformer architecture

Act 3. The Bills

Chapter 10. Size Matters

In 2017, Altman and Hassabis almost met, but Altman disliked Hassabis, so he met Mustafa Suleyman instead:

In 2017, both Altman and Hassabis took part in an AI safety conference in California, set up by the Future of Life Institute. Hoffman (OpenAI’s board member) was there, and afterward, he tried to set up a dinner between the American start-up guru and the British neuroscientist. Altman didn’t like the idea, arguing that Hassabis was uncooperative and seemingly unconcerned about the existential risks of AI that Altman was trying to prevent. So Hoffman brought Mustafa Suleyman instead.

The origin story of GPT.

OpenAI Charter:

Knowing that they needed to rethink their strategy, the founding team started working on an internal document about the path to AGI. In April 2018, they published what they called a new charter on their website. It was a mix of outsized goals and pledges - and a hint about how the non-profit was about to make the mother of all U-turns. …

The whole thing sounded magnanimous. OpenAI was framing itself as an organization that was so highly evolved that it was putting the interests of humanity above traditional Silicon Valley pursuits like profit and even prestige. …

But reading between the lines, it also looked like Altman and Brockman were preparing to abandon OpenAI’s founding principles. Three years earlier, when they launched the nonprofit, they said that OpenAI’s research would be “free from financial obligations.” Now OpenAI’s charter mentioned, in passing, that it would actually need a lot of money: “We anticipate needing to marshal substantial resources to fulfill our mission,” they wrote, “but [we] will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.”

마이크로소프트와의 전략적 제휴:

(Sam Altman) didn’t want to lose complete control of OpenAI by selling it to a larger tech company - as DeepMind had done to Google. But a strategic partnership could create the illusion of greater independence from a larger tech company, while giving him the computing power OpenAI needed. … Microsoft quickly came up as an obvious choice. Both Hoffman and Altman had personal connections to the company. They both personally knew Microsoft’s CTO, Kevin Scott, and Hoffman was close to Microsoft’s CEO, Satya Nadella. …

As OpenAI got closer to running out of cash, Nadella was four years into his attempt to transform Microsoft. …

Altman wasn’t promising to help him make a better Excel spreadsheet. He wanted to bring abundance to humanity. And Nadella was impressed by what Altman’s small team had already accomplished, particularly with LLMs. …

Nadella realized that the real return on a $1 billion investment in OpenAI wasn’t going to come from the money after a sale or stock market flotation. It was the technology itself. OpenAI was building AI systems that could one day lead to AGI, but along the way, as those systems became more powerful, they could make Azure a more attractive service to customers. Artificial intelligence was going to become a fundamental part of the cloud business, and cloud was on track to make up half of Microsoft’s annual sales. If Microsoft could sell some cool new AI features - like chatbots that could replace call center workers - to its corporate customers, those customers were less likely to leave for a competitor. The more features they signed up for, the harder it would be to switch.

Better Language Models and Their Implications:

They decided to release a smaller version of the model, warning in a blog post in February 2019 that it could be used to generate misinformation on a large scale. … “Due to our concerns about malicious applications of the technology, we are not releasing the trained model,” the post said. The announcement itself was more about the risks than the model itself. Its title was “Better Language Models and Their Implications.” …

More people than ever wanted to hear about it.

Altman and Brockman would go on to say that this was never their intention and that OpenAI was genuinely concerned about how GPT-2 could be abused. But their approach to public relations was, arguably, still a form of mystique marketing with a dash of reverse psychology.

A tech cult:

For those who worked at OpenAI, and at DeepMind too, the relentless focus on saving the world with AGI was gradually creating a more extreme, almost cultlike environment. In the San Francisco headquarters of OpenAI, Sutskever was fashioning himself as a kind of spiritual leader.

Effective altruism:

Effective altruism hit the spotlight in late 2022 when one-time crypto billionaire Sam Bankman-Fried became the movement’s most well-known supporter. … Instead of volunteering at a homeless shelter, for instance, you could help more people by working at a high-paying job like a hedge fund, making lots of money, and then giving that money to build several more homeless shelters. The concept was known as “earning to give,” and the goal was to get as much bang for your charitable buck as possible. …

Nick Beckstead, a program officer with effective altruism’s biggest charitable backer, Open Philanthropy, once wrote that “saving a life in a rich country is substantially more important than saving a life in a poor country because richer countries have more innovation, and their workers are more economically productive.” Human life was quantifiable, and doing good was a mathematical problem that needed teasing out.

The mission of building AGI had a particular appeal to anyone who believed in effective altruism’s higher-numbers-are-better philosophy, because you were building technology that could impact billions or even trillions of lives in the future. (See: Longtermism)

Capped-profit company: somewhere between an ordinary for-profit and a “B Corp”:

Companies that try to make the world a better place and earn profits sometimes structure themselves as B Corps, or benefit corporations. It’s a legal alternative to the for-profit model that most other firms fall under, in which the primary objective is to maximize shareholder value. American economist Milton Friedman best summed up this more popular approach in 1962: “There is one and only one social responsibility of business - to use its resources and engage in activities designed to increase its profits.” (Friedman doctrine)

The B Corp is designed to balance profit seeking with a mission. Puffer-jacket maker Patagonia and Ben & Jerry’s both have the model, which means that whenever they make a decision, they are legally required to analyze its impact on employees, suppliers, customers, and the environment, with equal regard to their shareholders. …

Altman and Brockman designed what they claimed was a middle way, a byzantine mishmash of the nonprofit and corporate worlds. In March 2019 they announced the creation of a “capped profit” company. This was a structure where any new investor would have to agree to a limit on the returns they received from their investment. … To start with, the threshold was very high, which made it a terrific deal for those first investors: It came into play when profits were in excess of a one hundred times return. This meant that if an investor put $10 million into OpenAI, their profit would only get capped after their investment had led to $1 billion in returns.

Those would be huge returns, even for Silicon Valley. Altman says the one-hundred-times cap has since been reduced “in orders of magnitude” for subsequent investors, and he argues that those first backers were taking a huge risk.
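
The cap arithmetic above is simple enough to check in a few lines. A minimal sketch, assuming the only rule is the one-hundred-times cap on early investors; the function name and numbers are illustrative, and the real deal terms were more complex and only partly public:

```python
# Toy model of the "capped profit" arrangement described above: an
# investor's payout is limited to cap_multiple times their investment;
# anything beyond the cap would, on paper, revert to the nonprofit.
def investor_payout(invested: float, gross_return: float,
                    cap_multiple: float = 100.0) -> float:
    return min(gross_return, cap_multiple * invested)

# A $10M first-round investment only hits the cap at $1B in returns.
print(investor_payout(10e6, 500e6))  # $500M: below the cap, paid in full
print(investor_payout(10e6, 5e9))    # $5B: capped at $1B
```

The very high initial threshold is why the structure was such a good deal for those first backers: almost any realistic outcome fell below the cap.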

The nonprofit OpenAI Inc.:

As part of the new, convoluted structure, Altman created an overarching nonprofit company called OpenAI Inc., with a board of directors who made sure that OpenAI LP (the capped-profit company) was building AGI that was “broadly beneficial.” The members of the board included Altman, Brockman, and Sutskever, along with Reid Hoffman, Quora CEO Adam D’Angelo, and a technology entrepreneur named Tasha McCauley.

Contradictions:

Having pledged not to help “concentrate power” with its AI, it was now helping one of the world’s most powerful tech companies become more powerful. After promising to help other projects on the brink of AGI because that journey shouldn’t be “competitive,” it would instead spark a global arms race in which companies and developers would churn out AI systems more haphazardly than ever before to try to rival OpenAI. And as it clamped down on details of each new language model it prepared to release, OpenAI was closing itself off from outside scrutiny. Its name was a source of amusement among skeptical academics and worried AI researchers. …

He was no longer building AI for humanity but to help a large business remain dominant and take first place in a heated competition.

Chapter 11. Bound to Big Tech

The rationalization that “minor” compromises are fine when the goal is noble:

Many of (OpenAI’s) researchers didn’t believe their mission had been compromised. They had bought into the notion that the benefits of researching AGI outweighed any scruples about how they might get there. So long as they stuck to their all-important charter, it didn’t necessarily matter where the money was coming from.

Dario Amodei’s worries:

“We’re doing AI for humanity, but we’re also becoming a technology provider for a company that’s trying to maximize profit,” he pointed out to his coworkers, according to someone who heard his arguments. It didn’t add up.

Amodei quits and founds Anthropic:

In the end he … decided to quit OpenAI, along with his sister Daniela and about half a dozen other researchers at the company. This wasn’t just a walkout over safety or the commercialization of AI, though. Even among the most hardcore worriers of AI, there was opportunism. … Amodei was witnessing the beginnings of a new boom in AI. He and his colleagues decided to start a new company called Anthropic, named after the philosophical term that refers to human existence, to underscore their prime concern for humanity. It would be a counterweight to OpenAI, just as OpenAI had been to DeepMind and Google. Of course, they also wanted to chase a business opportunity. …

Within a year Anthropic had raised another $580 million, mostly from the wealthy young founders of the crypto exchange FTX, who found their way to Amodei thanks to their shared interest in effective altruism. Ironically, two years after Amodei had complained about OpenAI’s commercial ties with Microsoft, he would take more than $6 billion in investment from Google and Amazon, aligning himself with both companies.

Big Tech’s repeated deceptions:

As Big Tech failed over and over again to responsibly govern itself, a sea change was happening. For years companies like Google, Facebook, and Apple had portrayed themselves as earnest pioneers of human progress. Apple was making products that “just worked.” Facebook was “connecting people.” Google was “organizing the world’s information.” But now Silicon Valley was dealing with a global backlash against its growing power. Facebook’s Cambridge Analytica scandal made people realize they were being used to sell ads. Critics accused Apple of hoarding more than $250 billion in cash offshore, untaxed, and limiting the lifespan of iPhones so that people would have to keep buying them. And behind the scenes at Google, researchers Timnit Gebru and Margaret Mitchell were starting to sound a warning about how language models could amplify prejudice.

Mustafa Suleyman:

As Alphabet’s new CEO Sundar Pichai worked on centralizing control of the conglomerate, he was also looking at how DeepMind could better support Google’s bottom line. … As he tightened Google’s grip on the AI lab, the relationship between Hassabis and Suleyman was also deteriorating. …

Suleyman had also developed a reputation at DeepMind for bullying, and several members of staff complained about harassment, according to a number of former employees. In late 2019, after an independent legal investigation, he was removed from his management roles.

Apparently untroubled by those allegations, Google then gave Suleyman the prestigious role of vice president of AI at its headquarters in Mountain View. …

At the Google mothership, Suleyman focused his attention on language models, a field that DeepMind had largely neglected even as OpenAI chased it aggressively. He worked with a team of Google engineers who were developing LaMDA, the company’s large language model project that was based on the transformer, and he also grew closer to the well-connected Reid Hoffman. …

The angst Suleyman had felt about Big Tech was melting away, and his beliefs about the risks of corporate monopolies had shifted.

Chapter 12. Myth Busters

The bias problem in LLMs:

When you put the prompt “every man wonders…” into GPT-3, it would reply with “why he was born into this world and what his life is for.” When you typed “every woman wonders…,” its response was “what it would be like to be a man,” according to experiments published in March 2022 by writer and technology consultant Jenny Nicholson. …

According to OpenAI’s own research, GPT-3 also tended to use more negative words when talking about Black people, and when it talked about Islam, it was more likely to use words like violence, terrorism, and terrorist. …

More data meant the models sounded more fluent but also made it harder to track exactly what GPT-3 had learned, including the bad stuff.

Despite the effort, no complete solution exists. And the effort itself doesn’t even seem sufficient:

OpenAI did try to stop all that toxic content from poisoning its language models. … It would then use low-paid human contractors in developing countries like Kenya to test the model and flag any prompts that led it to harmful comments that might be racist or extremist. The method was called reinforcement learning from human feedback, or RLHF. …

But it’s still unclear how secure that system was or is today. In the summer of 2022, for instance, University of Exeter academic Stephane Baele wanted to test OpenAI’s new language model at generating propaganda. … Then Baele saw an email from OpenAI. The company had noticed all the extremist content he was generating and wanted to know what was going on. He replied that he was doing academic research, expecting that he’d now have to go through a long process of providing evidence of his credentials. He didn’t. OpenAI never replied to ask for evidence that he was an academic. It just believed him.
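
The passage names RLHF only in passing; here is a minimal toy sketch of the idea it refers to, under stated assumptions: simulated human raters give pairwise preferences, a reward model is fit to them with a Bradley-Terry objective, and a policy is nudged toward high-reward outputs with a REINFORCE step. The canned responses, the rater function, and all hyperparameters are invented for illustration; none of this is OpenAI’s actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four canned "model outputs"; the hidden scores stand in for what human
# raters prefer. All values here are invented for illustration.
responses = ["helpful answer", "evasive answer", "rude answer", "toxic answer"]
true_human_pref = np.array([2.0, 0.0, -1.0, -3.0])

def human_prefers(i: int, j: int) -> int:
    """Simulated rater: returns the index of the preferred response."""
    return i if true_human_pref[i] > true_human_pref[j] else j

# 1. Fit a reward model to pairwise comparisons (Bradley-Terry model).
reward = np.zeros(len(responses))
for _ in range(2000):
    i, j = rng.choice(len(responses), size=2, replace=False)
    winner = human_prefers(i, j)
    loser = j if winner == i else i
    p_win = 1.0 / (1.0 + np.exp(reward[loser] - reward[winner]))
    reward[winner] += 0.05 * (1.0 - p_win)  # gradient ascent on log-likelihood
    reward[loser] -= 0.05 * (1.0 - p_win)

# 2. REINFORCE: push a softmax policy toward high-reward responses.
logits = np.zeros(len(responses))
for _ in range(500):
    probs = np.exp(logits) / np.exp(logits).sum()
    a = rng.choice(len(responses), p=probs)
    advantage = reward[a] - probs @ reward  # baseline: expected reward
    logits += 0.1 * advantage * (np.eye(len(responses))[a] - probs)

probs = np.exp(logits) / np.exp(logits).sum()
for r, p in sorted(zip(responses, probs), key=lambda x: -x[1]):
    print(f"{p:.2f}  {r}")
```

After a few hundred updates the policy concentrates its probability on the response the simulated raters preferred, which is the core trick: the model is steered by a learned model of human preference rather than by its original training objective.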

The origin story of the Stochastic Parrots paper.

OpenAI grows increasingly opaque:

When it released GPT-2 a year later, OpenAI became vaguer. … Details of OpenAI’s training data became even murkier when it released GPT-3 in June 2020. …

Why? At the time, OpenAI said publicly that it didn’t want to give a set of instructions to bad actors - think propagandists and spammers. But keeping that data hidden also gave OpenAI a competitive advantage against other companies, like Google, Facebook, or now, Anthropic. If it also transpired that certain copyrighted books had been used to teach GPT-3, that could have hurt the company’s reputation and opened it up to lawsuits (which, sure enough, OpenAI is fighting now). …

Competition without regulation:

Imagine if a pharmaceutical company released a new drug with no clinical trials and said it was testing the medication on the wider public. Or a food company released an experimental preservative with little scrutiny. That was how large tech firms were about to start deploying large language models to the public, because in their race to profit from such powerful tools, there were zero regulatory standards to follow. It was up to the safety and ethics researchers to study all the risks from inside these firms, but they were hardly a force to be reckoned with. At Google, their leaders had been fired. At DeepMind, they represented a tiny proportion of the research team. A signal was emerging more clearly each day. Get on board with the mission to build something bigger, or leave.

Act 4. The Race

Chapter 13. Hello, ChatGPT

Chapter 14. A Vague Sense of Doom

Chapter 15. Checkmate

Chapter 16. In the Shadow of Monopolies